Another Driver Died in a Tesla Autopilot Crash

We need to keep scrutinizing the technology—and whether drivers are too easily lulled into trusting it more than they should.
Autopilot might not be the selling point Tesla’s made it out to be.


Friday night’s admission by Tesla that its Autopilot mode was activated during a deadly crash involving a Model X SUV earlier this month is bad for Tesla, bad for the cause of self-driving cars, and certainly bad for anyone who rides in semiautonomous or autonomous vehicles or shares the road with them. On March 23, the vehicle hit a barrier on Highway 101 near Mountain View, California, then caught fire and was struck by two other vehicles. The driver died. In an earlier statement about the incident, issued before Tesla had retrieved the SUV’s logs, the electric-vehicle company was preemptively defensive of its technology, stressing that while Autopilot can’t prevent all accidents, it makes them “less likely to occur.” The Friday-evening update offers a fuller, but by no means complete, picture of what happened:
In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum. The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
The reason this crash was so severe is because the crash attenuator, a highway safety barrier which is designed to reduce the impact into a concrete lane divider, had been crushed in a prior accident without being replaced. We have never seen this level of damage to a Model X in any other crash.
According to the Wall Street Journal, the National Transportation Safety Board is investigating the incident.

The crash took place five days after a self-driving Uber being tested in Arizona struck and killed a pedestrian at night, leading Uber to suspend its tests of the vehicles everywhere and Arizona to suspend the company’s testing in the state. That incident, the first pedestrian fatality caused by a fully self-driving car, has inspired louder calls for the nascent technology to be strictly regulated as companies race to perfect it.

Tesla’s Autopilot is a less sophisticated technology than the fully autonomous systems that Uber, Google sister company Waymo, and others are developing, and it’s not meant to be used without an alert driver at the wheel. But that hasn’t stopped Tesla owners from occasionally behaving recklessly with the feature. That’s another key difference between Tesla’s tech and Uber’s: Anyone who can afford to buy a Tesla can use it.

The March 23 crash is the second known fatal incident involving Tesla’s Autopilot, following a 2016 collision between a Model S and a tractor-trailer that killed the Model S’s driver. In that crash, as in the one last week, the driver appears not to have had his hands on the wheel, and the NTSB split blame for the fatality between the shortcomings of Tesla’s technology and the actions of the tractor-trailer’s driver.

We still don’t know all the details of the crash on Highway 101, including what the driver was doing and whether Autopilot failed in any way, though it will certainly receive a great deal of scrutiny at a time when many aspects of Tesla’s business are being prodded for weaknesses. The company’s stock got pummeled all week as bad news continued to pile up for the manufacturer. On Thursday it recalled 123,000 Model S vehicles, almost half of the cars it has ever sold, over an issue with power-steering bolts that it will retrofit. Perhaps most worryingly for the business, it is still struggling with manufacturing delays for the Model 3, Tesla’s mass-market electric car, whose production has consistently fallen behind the timelines CEO Elon Musk has promised. And there’s reason to believe the company is running out of cash.

All of which might help explain why Tesla’s latest update on the March 23 crash, while stressing regret for the lost life, contains more than a hint of exasperation over yet another instance in which the company must defend a technology that it believes, probably rightly, will improve road safety once perfected:


In the past, when we have brought up statistical safety points, we have been criticized for doing so, implying that we lack empathy for the tragedy that just occurred. Nothing could be further from the truth. We care deeply for and feel indebted to those who chose to put their trust in us. However, we must also care about people now and in the future whose lives may be saved if they know that Autopilot improves safety. None of this changes how devastating an event like this is or how much we feel for our customer’s family and friends. We are incredibly sorry for their loss. 
But it’s also possible to believe in the promise of self-driving cars while worrying about the ways in which the technology is currently being deployed, particularly the notion that a human driver and a self-driving vehicle can compensate for each other’s shortcomings. The technology is still immature, and the human is too easily lulled into inattention. It’s becoming more apparent that the combination of the two can sometimes be deadly.